TOXICOGENOMICS: A Review

 

Patel VD1, Patel MB1, Anand IS1, Patel CN1, Bhatt PA2

1Department of Clinical Pharmacy, Sri Sarvajanik Pharmacy College, Mehsana – 384001;

2L.M. College of Pharmacy, Ahmedabad, Gujarat, India.

 

ABSTRACT:

Toxicogenomics is a rapidly developing discipline that promises to aid scientists in understanding the molecular and cellular effects of chemicals in biological systems. This field encompasses global assessment of biological effects using technologies such as DNA microarrays, high-throughput NMR, and protein expression analysis.1 Toxicogenomics is an evolving science that measures global gene expression changes in biological samples exposed to toxic agents and investigates the complex interaction between genetic variability and environmental exposure in toxicological effects. DNA microarrays have become the most popular and important method for measuring expression at the mRNA level, offering great potential for environmental and toxicological studies. Gene expression changes can potentially provide a more sensitive, immediate, and comprehensive marker of toxicity than typical toxicological endpoints such as morphological changes, carcinogenicity, and reproductive toxicity. In this regard, toxicogenomics includes genomic-scale mRNA expression (transcriptomics), cell- and tissue-wide protein expression (proteomics), metabolite profiling (metabonomics), and bioinformatics. These studies can be grouped as "-omics" studies, which can be applied to various kinds of samples and species.

 

KEY WORDS: toxicogenomics, microarray, proteomics.

 

INTRODUCTION

Despite great advances in our understanding of biological processes since the discovery of the structure of DNA in the 1950s, there is still much to learn about how environmental agents, including chemicals, act within the bodies of animals and humans to cause disease. The decoding of the human genome has accelerated that learning. Now a set of new technologies is enabling scientists to get more information than ever before about basic cellular processes. This new information is already being put to use in improving diagnostic tests and therapeutic agents in clinical medicine. In the near future it promises to improve our ability to measure and predict the effects of chemicals on human health.

 

Collectively, these technologies have been termed the "omics," because this suffix is added to roots describing the particular part of the cellular machinery being studied: "toxicogenomics" refers to the responses of genes to toxic exposures; "proteomics" indicates the responses of proteins; and "metabonomics" refers to the responses of metabolites. This paper will focus on the applications of the "omics" to chemical safety and regulation. Applications to drug development (i.e., "pharmacogenomics") illustrate how the pharmaceutical field is advancing the science in ways that are relevant to understanding and regulating chemical toxicity.2

 

To obtain information about how chemicals affect human health, scientists traditionally have focused on measuring exposures and looking for adverse health outcomes, from poor performance on neuropsychological tests to cancer and death. The scientists' two main tools are toxicologic studies, conducted on animals or cultured cells in laboratories, and epidemiologic studies, which observe differences in diseases among groups of people. Most toxicological studies intentionally expose experimental animals to controlled doses, and then look for effects such as tumors, behavior changes, altered reproductive function, or changes in blood proteins or the microscopic appearance of tissues that indicate organ damage. Newer, "in vitro" toxicological tests use cultured cells or tissues instead of whole animals to gain insight into biological responses to toxic exposures. Epidemiological studies typically start by observing diseases within groups of people, and then attempt to estimate past exposure. These methods have provided a great deal of information over the past decades to help protect the public from harm, but they also have significant limitations. Both traditional approaches tell only part of the story: the beginning (the chemical exposure) and the end (the resultant illness or pathologic change). They provide less information about the mechanisms of toxicity, or about the many intermediate steps that occur before the visible damage is done.1-2

 

That black box, as biologists call it, is now being opened far wider than has previously been possible. Cutting-edge techniques are more thoroughly investigating not just the end-stage, externally observable effects of chemicals on rats or humans, but also their more subtle and incremental effects on the inner workings of individual cells. With these new insights, medical scientists are increasingly able to detect diseases, such as cancer, in their early stages, and are now working on interventions to stop disease at these earlier points.

 

Different industries see somewhat different futures in the omics crystal ball. Closely regulated pharmaceutical companies anticipate a golden age of accelerated research and development for a wide array of drugs, including ones tailored to an individual's genetic and metabolic profile. The chemical industry, which faces much lower regulatory hurdles and fewer testing requirements for new chemicals, is more restrained in its enthusiasm. Some in the chemical industry fear that false alarms might be set off by the detection of half-understood and ultimately harmless cellular responses, forcing non-toxic products off the market. Both industry and academic scientists speak of the need for the public to understand the benefits as well as the limitations of this new science.

 

In addition to providing insight into the cause and progress of diseases, including those caused by chemical exposure, these technologies are also being used to understand how individuals differ in their responses to chemicals, whether therapeutic or toxic. By detecting tiny differences in the genes (and thus the proteins) involved in responding to chemicals, scientists are gradually discovering why some people get sick from exposures or experience side effects from drugs, while others don't. This field of "pharmacogenetics" will help pharmaceutical companies and physicians tailor individualized drugs and dosages to a patient's genetic profile. For those concerned with chemical safety, "toxicogenetics" will aid our understanding of how individuals vary in their susceptibility to harm from chemical exposures.

 

The public-interest community has a critical role to play in helping guide the application of this powerful new science. As with all new technologies, alongside the societal benefits come societal risks. One future benefit is likely to be faster and less expensive toxicologic testing that relies less on the use of animals and reduces uncertainties in how chemicals affect human health. But public-interest groups must engage in the science-policy process to ensure that shortcuts aren't taken and public-health protection compromised. Similarly, the fields of pharmacogenetics and toxicogenetics promise individualized protection from harm, but individual genetic information must be collected and used in such a way that people's privacy, insurability and employability are protected. In order to engage effectively, public-interest groups must understand these new technologies in detail, their strengths as well as their limitations. But at present, few within the public-interest community have engaged or fully educated themselves.

 

This report is intended to introduce those concerned with protecting the public interest to what the omics technologies are, how these technologies may change the way chemicals are regulated, and who is doing what within the different fields. The report also outlines a role for the public-interest community in helping to ensure that these techniques are used to promote, rather than undercut, health-protective policies.

 

INSIDE THE BIOLOGICAL BLACK BOX:

Cells, organs and organisms have a variety of mechanisms for coping with toxic insults. Some are built into the structure of the cell, such as the way genes are protected within the cell's nucleus from toxins that might enter the cytoplasm. Other defense mechanisms are more dynamic, involving changes in gene and protein expression: a cell might switch on a gene to produce an enzyme that cleaves the toxic molecule, or make a protein to bind and store a heavy metal. It is these defensive processes, and the ways in which they sometimes fail to protect the body from chemical harm, that are being made visible by the new techniques. Three selected categories of these processes are described below.

 

1. Metabolism and excretion:

Metabolic enzymes convert toxic chemicals into substances the body can excrete. Many of these enzymes can be induced, meaning that when the cell is exposed to the toxin, it makes more copies of the useful enzyme. (Alcohol dehydrogenase, for example, is induced by alcohol, which is why a regular drinker may be able to hold his liquor better than a teetotaler who starts imbibing.) The genetic signal to ramp up production can now be measured, as well as the increased concentration of that enzyme within the cell. If the cell cannot respond quickly or massively enough, damage is done to the toxin's target, whether it is DNA, a structural protein, a receptor involved in signaling within or between cells, or any of the thousands of other molecules critical to cellular function. In some cases, the defense mounted by the cell is itself harmful, if the now-abundant enzyme produces a toxic intermediate or accelerates metabolism of other chemicals into toxic forms. This problem is especially serious when there are multiple toxic insults and therefore chemical interaction among several defenses.

 

2. Storage and binding:

Some chemicals, rather than being chemically converted and eliminated, get stored within the body, bound to proteins that prevent them from interacting with and damaging vital targets. These storage proteins, like the metabolic enzymes, are often produced by a cell in larger quantities in response to exposure to the chemical they bind. Upon exposure to cadmium, for instance, cells in the liver make more copies of a metal-binding protein called metallothionein. So long as there is enough storage capacity in the liver for the metallothionein-cadmium complex, the kidney is spared from cadmium toxicity. If that storage capacity is exceeded, however, the extra cadmium reaches the kidney, causing damage to the kidney tubules. Scientists can now detect extra copies of the gene that codes for metallothionein, or the protein itself, in cells that have been exposed to cadmium.

 

3."Stress responses":

More generalized responses to toxic damage include the creation of special molecules that repair cell damage or help ensure proteins are folded properly. If enough toxic damage occurs, cells may stop their cycle of division, or if the damage is severe enough, even initiate controlled cell death ("apoptosis"). This mechanism can be used to eliminate cells with DNA damage that may otherwise result in cancer. These stress responses all involve the turning on and off of genes and the generation of more or fewer copies of a large number of proteins.1-2

 

TECHNOLOGIES IN TOXICOGENOMICS:

1) Gene Expression Profiling:

The first approach, called a gene expression micro-array, builds on the newly expanded understanding of the human genome (as well as genomes of other species) and on the improved techniques for rapidly synthesizing and copying strands of RNA and DNA. Messenger RNA (mRNA) is key: made only when a gene is switched on, it is the carrier of a gene's information to the cell's protein-making machinery. Thus, messenger RNA functions like the lights on an old-fashioned switchboard, indicating which of the many possible circuits are in use. Gene micro-array assays identify which lights are on, and even how bright they are, by measuring which genes have been "expressed" into messenger RNA strands.

 

The usual first step in running an assay is exposing animals to a chemical or other stressor. Some studies use cells that have been grown in the laboratory; studies also could be conducted with humans who have been exposed to the stressor. Researchers isolate all the messenger RNA from the exposed tissues or cells and convert it into single-stranded DNA through a process known as reverse transcription. As multiple copies of these DNA strands are made, they are labeled using radioactive compounds or special dyes. Next, the strands are mixed with a large array of single-stranded stretches of DNA, in which known genes are arranged in a known pattern. The single strands of DNA from the test subjects then bind to their counterpart DNA strands to form more stable, double-stranded DNA. By comparing patterns from exposed and unexposed subjects, investigators can tell which genes have been turned on or off (or dialed up or down) by the chemical. Genes rarely work alone; more often whole orchestras of genes are activated or shut down in concert. These new techniques liberate scientists from tracking the activity of just one gene at a time. Instead, they can observe thousands of genes at once on a single gene chip slide, seeing all the ones that are turned on (or off) in response to a particular chemical or stressor.
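To make the "switchboard" idea concrete, the sketch below shows the kind of comparison such an assay ultimately yields: per-gene intensity values from an exposed and an unexposed sample reduced to log ratios. This is a minimal sketch of the downstream arithmetic only; the gene names and intensity values are invented for illustration and do not describe any specific array platform.

```python
import numpy as np

# Hypothetical background-corrected signal intensities, one value per gene,
# for an exposed sample and an unexposed (control) sample.
genes = ["cyp1a1", "mt1a", "hsp70", "gapdh"]
exposed = np.array([5200.0, 870.0, 2400.0, 1500.0])
control = np.array([400.0, 820.0, 600.0, 1480.0])

# Log2 ratio per gene: positive = switched on/dialed up, negative = dialed down.
log_ratio = np.log2(exposed / control)

# Flag genes whose expression changed at least two-fold in either direction.
for gene, lr in zip(genes, log_ratio):
    status = "induced" if lr >= 1 else "repressed" if lr <= -1 else "unchanged"
    print(f"{gene}: log2 ratio = {lr:+.2f} ({status})")
```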

 

Gene expression changes associated with signal pathway activation can provide compound-specific information on the pharmacological or toxicological effects of a chemical. A standard method used to study changes in gene expression is the Northern blot.3 An advantage of this traditional molecular technique is that it definitively shows the expression level of all transcripts (including splice variants) for a particular gene. This method, however, is labor intensive and is practical only for examining expression changes for a limited number of genes. Alternate technologies, including DNA microarrays, can measure the expression of tens of thousands of genes in an equivalent amount of time.4 DNA microarrays provide a revolutionary platform to compare genome-wide gene expression patterns in dose and time contexts. There are two basic types of microarrays used in gene expression analyses: oligonucleotide-based arrays5 and cDNA arrays6. Both yield comparable results, though the methodology differs. Oligonucleotide arrays are made using specific chemical synthesis steps directed by a series of photolithographic masks, light, or other methods to generate the specific sequence order in the synthesis of the oligonucleotide. The result of these processes is the generation of high-density arrays of short oligonucleotide (~20-80 bases) probes that are synthesized in predefined positions. cDNA microarrays differ in that DNA sequences (0.5-2 kb in length) corresponding to unique expressed gene sequences are usually spotted onto the surface of treated glass slides using high-speed robotic printers that allow the user to configure the placement of cDNAs on a glass substrate or chip.

 

Spotted cDNAs can represent either sequenced genes of known function, or collections of partially sequenced cDNA derived from expressed sequence tags (ESTs) corresponding to messenger RNAs of genes of known or unknown function.

 

Figure 1 (see reference 7)

 

Any biological sample from which high-quality RNA can be isolated may be used for microarray analysis to determine differential gene expression levels. For toxicology studies, there are a number of comparisons that might be considered. For example, one can compare tissue extracted from toxicant-treated organisms versus that of vehicle-exposed animals. In addition, other scenarios may include the analysis of healthy versus diseased tissue or susceptible versus resistant tissue. For spotted cDNA on glass platforms, differential gene expression measurements are achieved by a competitive, simultaneous hybridization using a two-color fluorescence labeling approach.8 Multi-color labels are currently being optimized for adequate utility. Briefly, isolated RNA is converted to fluorescently labeled "targets" by a reverse transcriptase reaction using a modified nucleotide, typically dUTP or dCTP conjugated with a chromophore. The two RNAs being compared are labeled with different fluorescent tags, traditionally either Cy3 or Cy5, so that each RNA has a different energy emission wavelength, or color, when excited by dual lasers. The fluorescently labeled targets are mixed and hybridized on a microarray chip. The array is scanned at two wavelengths using independent laser excitation of the two fluors, for example, at 632 and 532 nm for the red (Cy5) and green (Cy3) labels. The intensity of fluorescence emitted at each wavelength from each spot (gene) on the array corresponds to the level of expression of the gene in one biological sample relative to the other. The ratio of the intensities of the toxicant-exposed versus control samples is calculated, and induction or repression of genes is inferred. Optimal microarray measurements can detect differences as small as a 1.2-fold increase or decrease in gene expression. Although the theoretical applications seem endless, DNA microarrays have certain limitations. These measurements are only semiquantitative due to a number of factors, including cross-hybridization and sequence-specific binding anomalies. Another limitation is the number of samples that can be processed efficiently at a time. Processing and scanning samples may take several days and generate large amounts of information that can take considerable time to analyze. Automation is being applied to microarray technology, and new equipment such as automated hybridization stations and auto-loaded scanners will allow higher-throughput analysis. To overcome these limitations, one can combine microarrays with quantitative polymerase chain reaction (QPCR), TaqMan, and other technologies in development to monitor the expression of hundreds of genes in a high-throughput fashion.8 This will provide more quantitative output that may be crucial for certain hazard identification processes. In the QPCR assay,9 one set of primers is used to amplify both the target gene cDNA and another neutral DNA fragment, engineered to contain the desired gene template primers, which competes with the target cDNA fragment for the same primers and acts as an internal standard. Serial dilutions of the neutral DNA fragment are added to PCR amplification reactions containing constant amounts of experimental cDNA samples. The neutral DNA fragment utilizes the same primers as the target cDNA but yields a PCR product of different size.

QPCR can offer more quantitative measurements than microarrays because measurements may be made in "real time" during amplification and within a linear dynamic range. The PCR reactions may be set up in 96- or 384-well plates to provide high-throughput capability.
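The two-color ratio calculation described above can be sketched in a few lines. The intensities below are simulated; the global median normalization step is one common convention for correcting dye bias rather than a fixed part of the protocol, and the 1.2-fold cutoff simply mirrors the detection limit quoted in the text.

```python
import numpy as np

# Simulated per-spot fluorescence intensities from one two-color array:
# Cy5 = toxicant-exposed sample, Cy3 = vehicle control.
rng = np.random.default_rng(0)
cy3 = rng.uniform(200, 5000, size=1000)        # control channel
cy5 = cy3 * rng.normal(1.0, 0.05, size=1000)   # mostly unchanged genes
cy5[:10] *= 1.5                                # a handful of induced genes

# Global median normalization corrects for dye and labeling bias, a common
# first step before ratios are interpreted.
cy5_norm = cy5 * (np.median(cy3) / np.median(cy5))

ratio = cy5_norm / cy3
# The text cites ~1.2-fold as the smallest change an optimal array detects.
changed = np.where((ratio > 1.2) | (ratio < 1 / 1.2))[0]
print(f"{changed.size} spots exceed the 1.2-fold threshold")
```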

 

2) Expression Profiling of Toxicant Response:

The validity and utility of analysis of gene expression profiles for hazard identification depend on whether different profiles correspond to different classes of chemicals11 and whether defined profiles may be used to predict the identity or properties of unknown or blinded samples derived from chemically treated biological models.12 Gene expression profiling may aid in prioritization of compounds to be screened in a high-throughput fashion and selection of chemicals for advanced stages of toxicity testing in commercial settings. In one effort to validate the toxicogenomic strategy, Waring and coworkers13 conducted studies to address whether compounds with similar toxic mechanisms produced similar transcriptional alterations. This hypothesis was tested by generating gene expression profiles for 15 known hepatotoxicants in vitro (rat hepatocytes) and in vivo (livers of male Sprague-Dawley rats) using microarray technology.13 The results from the in vitro studies showed that compounds with similar toxic mechanisms resulted in similar but distinguishable gene expression profiles. They took advantage of the variety of hepatocellular injuries (necrosis, DNA damage, cirrhosis, hypertrophy, hepatic carcinoma) that were caused by the chemicals and compared pathology endpoints to the clustering output of the compounds' gene expression profiles. Their analyses showed a strong correlation between the histopathology, clinical chemistry, and gene expression profiles induced by the various agents.13 This suggests that DNA microarrays may be a highly sensitive technique for classification of potential chemical effects.
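The clustering step in such studies can be illustrated with a toy version: compounds are grouped by the similarity of their expression profiles, and the resulting clusters can then be compared with pathology endpoints. The profiles and compound labels below are invented, and hierarchical clustering with correlation distance is one standard choice, not necessarily the exact algorithm of the original study.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# Hypothetical log2 expression profiles (rows = compounds, cols = genes).
profiles = np.array([
    [2.1, 1.8, 0.1, -0.2],   # necrogenic compound A
    [2.0, 1.9, 0.0, -0.1],   # necrogenic compound B
    [-0.1, 0.2, 1.7, 1.5],   # enzyme inducer C
    [0.0, 0.1, 1.8, 1.6],    # enzyme inducer D
])
names = ["A", "B", "C", "D"]

# Cluster compounds by profile similarity: 1 - Pearson correlation as the
# distance, average linkage, then cut the tree into two clusters.
dist = pdist(profiles, metric="correlation")
tree = linkage(dist, method="average")
labels = fcluster(tree, t=2, criterion="maxclust")
for name, lab in zip(names, labels):
    print(f"compound {name}: cluster {lab}")
```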

 

Figure 2 (see reference 10)

 

In another study, gene expression alterations in Sprague-Dawley rat livers were measured for known and unknown compound treatments. This exercise revealed that it is possible to use previously derived gene expression profiles to characterize unknown compounds. In this study, correct, positive predictions were made regarding the nature of 12 of the 13 blinded samples.14 Multiple statistical and computational approaches, such as hierarchical clustering,15 principal component analysis,16 and set pair-wise correlation,16 were used to distinguish gene expression profiles derived from rat livers treated with different classes of chemicals and for different durations of exposure. Other computational methods, such as linear discriminant analysis,17 single-gene ANOVA,18 and genetic algorithm/K-nearest neighbor, were useful in revealing single or groups of highly discriminatory, informative genes whose expression pattern could distinguish gene expression patterns corresponding to different chemical treatments. Blinded samples that exhibited high similarity to known samples, as determined by set pair-wise correlation, were considered to tentatively share similar properties or identities.
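A minimal sketch of the set pair-wise correlation idea: a blinded sample is assigned to the known class whose reference profile it correlates with most strongly, and left unclassified if no correlation is convincingly high. The reference profiles, class names, and 0.8 threshold here are illustrative assumptions, not values from the cited study.

```python
import numpy as np

# Hypothetical reference profiles for known compound classes (gene log-ratios).
reference = {
    "peroxisome proliferator": np.array([1.9, 0.2, -1.1, 0.4]),
    "enzyme inducer":          np.array([0.1, 2.2, 0.3, -0.5]),
    "macrophage activator":    np.array([-0.8, 0.1, 1.6, 1.2]),
}

def classify(blinded_profile, min_r=0.8):
    """Assign a blinded sample to the class whose reference profile it
    correlates with most strongly (pair-wise Pearson correlation)."""
    best_class, best_r = None, -1.0
    for cls, ref in reference.items():
        r = np.corrcoef(blinded_profile, ref)[0, 1]
        if r > best_r:
            best_class, best_r = cls, r
    # Only call a match if the similarity is convincingly high.
    return (best_class, best_r) if best_r >= min_r else ("unclassified", best_r)

print(classify(np.array([2.0, 0.1, -0.9, 0.5])))
```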

 

Figure 3 (see reference 10)

 

3) Mechanistic Inference from Toxicant Profiling:

An extension of the use of toxicogenomics approaches is the better understanding of the mechanisms of toxicity. Bulera and coworkers19 identified several groups of genes reflective of mechanisms of toxicity and related to a hepatotoxic outcome following treatment. An example of the advantage of using a toxicogenomics approach to understand mechanisms of chemical toxicity was the observation that microcystin-LR and phenobarbital, both of which are liver tumor promoters, induced a parallel set of genes.19 Based on this information the authors speculated that liver tumor promotion by both compounds may occur by similar mechanisms. Such observations derived through the application of microarrays to toxicology will broaden our understanding of mechanisms and our ability to identify compounds with similar mechanisms of toxicity. The authors also confirmed toxicity in the animals using conventional methods, such as histopathology and modulations in liver enzymes and bilirubin levels, and related these effects to gene expression changes; however, it would have been advantageous to utilize the gene expression data to map relevant pathways depicting the mechanism(s) associated with the hepatotoxicity of each compound.20 Collectively, in the future, researchers may attempt to build "transcriptome" or "effector" maps that will help to visualize pathway activation.21

 

Finally, Huang and coworkers22 utilized cDNA microarrays to investigate gene expression patterns of cisplatin-induced nephrotoxicity. In these studies, rats were treated daily for 1 to 7 days with cisplatin at a dose that resulted in necrosis of the renal proximal tubular epithelial cells but no hepatotoxicity at day 7. Gene expression patterns for transplatin, an inactive isomer, were also examined and revealed little gene expression change in the kidney, consistent with the lack of nephrotoxicity of that compound. Cisplatin-induced gene expression alterations were reflective of the histopathological changes in the kidney, i.e., genes related to cellular remodeling, apoptosis, and alteration of calcium homeostasis, among others, which the authors describe in a putative pathway of cisplatin nephrotoxicity.

 

4) Protein Expression / Proteomics:23

Analyzing the thousands of different proteins in a tissue or cell can be done either through techniques that carefully isolate and "fingerprint" each individual protein (separation-based techniques) or through techniques that identify and fingerprint proteins within the milieu of intracellular fluid. When separation-based techniques are used, a key challenge is the isolation of individual proteins in such a way that they are not altered or damaged, and so can be properly identified. Once separated effectively, they can be quantitatively measured. Techniques that do not require separation have been developed more recently, and may ultimately be faster and easier to perform than separation-based techniques.

 

SEPARATION-BASED PROTEOMICS TECHNIQUES:

To study protein expression in a given biological structure, such as a tissue or a cell, the proteins can first be separated from the rest of the cellular structures, such as organelles, and from compounds such as nucleic acids. This is referred to as protein solubilization. Either chemical or mechanical methods are used to solubilize proteins from the integrant components of the cell or tissue of interest. Once the proteins are solubilized, they can then be separated from the nonsolubilized material by centrifugation. Centrifugation is a laboratory technique used to separate mixture samples, such as cell or tissue samples, into homogeneous components by spinning the mixture sample at high speed. The next step after protein solubilization is protein separation: the separation of different proteins from one another in the solubilized protein solution or mixture. A technique called two-dimensional polyacrylamide gel electrophoresis (2D PAGE, 2-DE) is most commonly used to separate proteins.

 

The gel electrophoresis method separates large molecules, such as proteins, on the basis of their size, electrical charge, and other physical properties. It works much like a multilayered filtration system, capturing proteins with specific physical and chemical properties at each filtration layer. The solubilized proteins are forced through a span of gelatinous medium, the polyacrylamide gel. As they pass through, the proteins become suspended at specific locations in the gel depending on their particular chemical and physical properties. In the 2D gel method, proteins are separated in two distinct steps, each step separating the proteins along a different dimension based on a specific protein characteristic.

 

First, the proteins are separated in one dimension according to their unique electrical charge. In the second step (and second dimension), they are separated according to their size. In order to visualize or locate the suspended and captured proteins in the 2D gel, the protein solution is stained with chemical dyes. The proteins are, in essence, suspended on an X-Y plot and made ready for the next step, image analysis. A non-gel-based separation technique used in proteomics is two-dimensional high-performance liquid chromatography (HPLC). Like the 2D gel, this method separates proteins in each of two dimensions according to two different protein characteristics. The proteins are separated in the first dimension based on size using a method called exclusion chromatography. In this step the solubilized protein mixture is applied to a column filled with a semisolid gel, which fractionates the mixture into components of different-sized proteins. In the second dimension, the proteins are separated by the reverse-phase HPLC method. This method separates the proteins based on how strongly they are adsorbed by the particular solids packed in a column. The more strongly adsorbed proteins reach the bottom of the column later than do the less strongly adsorbed ones. As in the 2D PAGE method, the proteins are suspended in discrete spots on an X-Y plot for analysis.

 

An essential aspect of proteomics studies is looking for differences in protein expression between normal and abnormal cell or tissue samples. Once the sample protein mixture is separated, the protein expression patterns can be analyzed for any such differences using computer-based image analysis. Various gel-analysis software packages are used for this purpose.

 

Although 2D gel electrophoresis isolates the proteins and provides some information about protein characteristics, such as mass, this information usually is not sufficient to identify most proteins accurately. After the proteins have been separated, they are identified and characterized using various techniques. One common technique, peptide-mass fingerprinting, identifies a protein by matching the masses of its component parts to a reference protein with similar component masses in an existing protein database. In this technique, a protein is isolated from the separation and chemically cleaved into specific subunits. The mass of these protein subunits is then measured using a method called mass spectrometry. Finally, the identity of the protein is determined when a matching subunit mass is found in an existing database that correlates the subunit mass with the protein's identity.
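At its core, peptide-mass fingerprinting reduces to matching measured subunit masses against a reference table within an instrument tolerance. The sketch below shows only that matching logic; the protein names, masses, and 0.5 Da tolerance are invented for illustration.

```python
# Hypothetical reference database: protein name -> expected peptide masses (Da).
database = {
    "metallothionein-1": [530.3, 646.4, 1187.6, 1449.7],
    "albumin":           [927.5, 1149.6, 1639.9, 2045.1],
}

def fingerprint(measured, tolerance=0.5):
    """Score each database protein by the fraction of its expected peptide
    masses matched by a measured mass within the given tolerance (Da)."""
    scores = {}
    for protein, expected in database.items():
        hits = sum(
            any(abs(m - e) <= tolerance for m in measured) for e in expected
        )
        scores[protein] = hits / len(expected)
    # Best match = protein with the highest fraction of matched peptides.
    return max(scores.items(), key=lambda kv: kv[1])

print(fingerprint([530.4, 1187.5, 1449.6, 800.0]))
```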

 

NON-SEPARATION-BASED PROTEOMICS TECHNIQUES:

Non-separation-based techniques also are used in proteomics. Recent developments in mass spectrometry have enabled proteins to be identified directly by this method without needing to separate them first. In an example of such an approach to proteomics, the protein expression patterns of slices of frozen tissue samples are analyzed using mass spectrometry. In this method, the masses of different proteins are determined from the samples, resulting in a mass profile of the tissue. This mass profile is then compared with the mass profile of healthy tissue for possible differences.

 

Protein microarrays are another non-separation-based proteomics technique (Tyers), similar to DNA microarray technology. Proteins are isolated from a tissue or cell sample of interest and are applied to a solid or fluid background. The background, called a chip, arranges the proteins in the solution based on chemical characteristics. This arrangement of proteins is the protein array. Once the protein array is created from the sample, various compounds (e.g., enzymes, other proteins) are added to the array in order to detect the proteins' interactions with them. This technique can elucidate protein modifications and enzymatic activities. Since a protein's structural and chemical integrity is highly sensitive to environmental factors such as temperature and pH, however, it is difficult to preserve proteins in their biologically active shape and form.

 

5) METABONOMICS:24

Just as changes in protein expression are "downstream" of gene expression changes and thus closer to actual reflections of toxicity or damage, changes in the relative quantities of the whole range of biologically important molecules within a cell follow the changes in protein expression and, for some toxic effects, may be closer to reflecting the actual toxicity. These small molecules include the carbohydrates that provide energy, the amino acids and nucleic acids that cycle in and out of proteins and genetic material, respectively, and other cogs in the cellular machinery known collectively as metabolites. Just as new technologies have allowed the simultaneous characterization of huge numbers of genes and proteins, the characterization of "metabolic profiles" encompassing a huge range of molecules within cells is being developed. The ultimate aim for many scientists is to use computational tools to combine analyses of gene expression, protein expression, and metabolite changes to create a complex, sophisticated model of cellular processes and responses to environmental stimuli.

 

Two terms have been coined to represent the analysis of a wide range of metabolites for understanding disease processes and toxicity: metabolomics and metabonomics. Metabolomics has been described as the study of "metabolic regulation and fluxes in individual cells or cell types," whereas metabonomics involves "the determination of systemic biochemical profiles and regulation of function in whole organisms by analyzing biofluids and tissues." Although metabolomics is becoming a useful research tool, metabonomics, characterizing metabolic profiles on a larger scale, is currently felt to provide more useful information for investigating the systemic toxicity of xenobiotics.

 

TECHNIQUES:

Two laboratory techniques are currently being used in metabonomic studies: mass spectrometry (MS) and nuclear magnetic resonance (NMR) spectroscopy. The biological material studied for metabolites can be essentially any biological fluid, from intracellular fluid to urine, saliva, or blood plasma. Mass spectrometry requires removal and some separation of metabolites from the fluid sample, usually done with high-pressure liquid chromatography (HPLC). Only NMR has the capacity to study compounds within intact tissues or intracellular fluid. The current techniques typically obtain multiple samples of blood or urine at several points in time and characterize their composition using NMR. To characterize their specific composition and structure, NMR takes advantage of the different responses of the different atoms within a molecule when it is exposed to powerful magnetic fields. Because NMR uses changes in magnetic fields at a distance and does not require physically separating individual chemicals (the technology is similar to that used for diagnostic magnetic resonance imaging [MRI] studies in humans), NMR can be used on intact tissues. In fact, a specific technique called magic angle spinning NMR (MAS-NMR) has been devised to characterize large numbers of different metabolites in intact tissues. Depending on the techniques used and the questions being asked, NMR can be used either to quantitatively characterize the entire range of metabolites present without specifically identifying the compounds (metabolic "profiling") or to identify specific structures of compounds within the pool of metabolites. The latter requires more involved and time-consuming chemical analysis. Currently, the broader, pattern-recognition use of NMR-based metabonomics is being used to detect toxicity in drug development, either in general (normal versus abnormal metabolic profiles) or for specific organ toxicity (e.g., classifying different types of hepatotoxicity).25 Sophisticated, multivariable statistical techniques are required to analyze the large amounts of data generated by these types of studies. The statistical analysis has been termed either unsupervised (i.e., blindly applying statistical techniques to the data without prior assumptions) or supervised (i.e., using comparisons with known compounds or other preexisting datasets). Often the data analysis begins with unsupervised analysis and then moves on to supervised analysis. Ultimately, as with gene and protein expression analyses, the value is obtained by comparing the experimental results with a large, well-characterized body of validated, standardized data.
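The unsupervised, pattern-recognition use of NMR data typically begins with something like the following: binned spectra are projected onto a few principal components and inspected for separation between treated and control animals. The simulated spectra below stand in for real binned 1H NMR data; PCA is the standard unsupervised first step named in the text.

```python
import numpy as np
from sklearn.decomposition import PCA

# Simulated binned 1H NMR spectra (rows = urine samples, cols = chemical-
# shift bins); treated samples get an extra metabolite signal in a few bins.
rng = np.random.default_rng(1)
controls = rng.normal(1.0, 0.05, size=(8, 50))
treated = rng.normal(1.0, 0.05, size=(8, 50))
treated[:, 10:15] += 0.6      # toxicant-associated metabolite increase
spectra = np.vstack([controls, treated])

# Unsupervised analysis: project spectra onto the first two principal
# components and look for separation between the groups.
scores = PCA(n_components=2).fit_transform(spectra)
for i, (pc1, pc2) in enumerate(scores):
    group = "control" if i < 8 else "treated"
    print(f"sample {i:2d} ({group}): PC1={pc1:+.2f} PC2={pc2:+.2f}")
```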

 

TECHNICAL LIMITATIONS OF METABONOMIC PROFILING:

1. The biofluid or tissue sampled must reflect the target organ of toxicity. For example, urine or blood samples may not be sensitive to metabolic changes associated with neurotoxicity. The capacity to do analyses in intact animals may ultimately allow for sophisticated analyses to be performed repeatedly on target tissues without disrupting the natural progression of toxicity.

 

2. As with the other omic technologies, metabonomics must detect critical changes associated with toxicity that may be orders of magnitude smaller than background changes and fluxes. This is a problem of both instrumentation (being able to detect these small changes) and interpretation (being able to distinguish changes associated with toxicity from changes that represent background responses or healthy, adaptive responses).

3. Because of the rapidity with which metabolites change, the timing of sampling in metabonomics is critical. The ability to measure metabolites repeatedly and non-invasively in the same subject, however, is an advantage in understanding the time course of toxicity.

 

4. Perhaps even more so than for proteomics, the laboratory instrumentation required for high-throughput and efficient metabonomic analyses is still being developed and is not as far advanced as that for gene expression assays.

 

6) Metabolic Profiling of Toxicant Response:

Robertson and coworkers evaluated the feasibility of a toxicogenomic strategy by generating NMR spectra of urine samples from male Wistar rats treated with different hepatotoxicants (carbon tetrachloride, α-naphthyl isothiocyanate) or nephrotoxicants (2-bromoethylamine, 4-aminophenol).26 Principal component analysis (PCA) of the urine spectra was in agreement with clinical chemistry data observed in blood samples taken from the chemically exposed animals at various time points of chemical exposure. Furthermore, PCA suggested low-dose effects with two of the chemicals that were not evident by clinical chemistry or microscopic analyses. This conclusion was demonstrated with the 150 mg/kg 2-bromoethanamine-treated animals, where only 5 of the 8 animals had creatinine or BUN levels outside the normal range at day 1, while all animals exhibited diuresis and principal component analysis was clearly indicative of a consistent effect in all 8 animals.

 

In another seminal study, 1H NMR spectroscopy was used to characterize the time-dependency of urinary metabolite perturbation in response to toxicant exposure. Male Han Wistar or Sprague-Dawley rats were treated with either control vehicle or one of 13 model toxicants or drugs that predominantly target the liver or kidney. The resultant 1H NMR spectra were analyzed using a probabilistic neural network approach.27 A set of 583 of the 1310 samples was designated as a training set for the neural network, with the remaining 727 independent cases employed as a test set for validation. Using these techniques, the 13 classes of toxicity, together with the variations associated with strain, were highly distinguishable (>90%). An important aspect of this study is the sensitivity of the methodology towards strain differences, which will be useful in investigating the genetic variation of metabolic responses across multiple animal models and may also prove useful in identifying susceptible subpopulations.
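The training/validation design of that study can be sketched as follows. A probabilistic neural network is not part of common Python libraries, so a k-nearest-neighbor classifier stands in for it here; the data are simulated, and only the 583-sample training split mirrors the study's design.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Simulated spectral feature matrix: 1310 samples x 50 bins, with 13 toxicity
# classes plus control (label 0); each class gets its own spectral signature.
rng = np.random.default_rng(2)
X = rng.normal(0, 1, size=(1310, 50))
y = rng.integers(0, 14, size=1310)
for cls in range(14):
    X[y == cls, cls % 50] += 3.0

# Hold out an independent test set for validation, mirroring the study's
# 583-sample training set and remaining independent test cases.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=583, random_state=0, stratify=y
)
clf = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
print(f"test-set accuracy: {clf.score(X_test, y_test):.2%}")
```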

 

7) Localization of Gene Expression:

In order to help understand the role of genes or proteins in toxic processes, specific cellular localization of these targets is needed. Pathological alterations such as necrosis and vasculitis are often localized to specific regions of an organ or tissue. It is not known whether subtle gene or protein expression alterations associated with these events are detectable when the whole organ is used for preparation of samples for further analyses. Laser capture microdissection (LCM)28 is one method used to precisely select affected tissue, thereby enhancing the probability of observing gene or protein expression changes associated with pathologically altered regions. For example, profiling specific pathological lesions that are considered to be precursors to cancer may help in understanding how chronic chemical exposure leads to tumor development. However, for some tissues or laboratories, LCM may not be technically feasible for discerning gene expression in cellular subtypes. A technical challenge may be that the affected area or region is too small for enough RNA or protein to be extracted for later analysis, or the extra manipulation compromises the quality of harvested samples. Therefore, when deriving samples from gross organ or tissue samples for expression analysis, one often has no measure of the specific gene or protein expression alterations attributable to a pathological change that has been diluted in the assayed organ or tissue. When an organ, or part thereof, is harvested from a chemically exposed animal, the response to the insult is almost always diluted to a certain extent because not every area or cell is responsive to treatment. Similarly, tumor samples or other diseased tissues may contain other significant cell types, including stroma, lymphocytes, or endothelial cells. Dilution effects are also involved when a heterogeneous expression response occurs. For example, even in a homogeneous cell population, each individual cell may have a very different quantitative response for each gene expression change. In order to address this problem, we evaluated the sensitivity of cDNA microarrays in detecting diluted gene expression alterations, thus simulating relatively minor changes in the context of a total organ or tissue. We found statistically significant differences in the expression of numerous genes between two cell lines (HaCaT and MCF-7) that continued to be detected even after a 20-fold dilution of the original changes,29 showing that microarray analyses, when conducted in a manner that optimizes sensitivity and reduces noise, may be used to determine gene expression changes occurring in only a small percentage of the cells sampled.

 

Finally, once important biomarkers are hypothesized from genomics and proteomics technologies, candidate target genes or proteins can then be monitored using more high-throughput, cost-effective immunohistochemical analyses in the form of tissue microarrays. Tissue microarrays are microscope slides on which thousands of minute tissue samples from normal and diseased organisms can be tiled in an array fashion. The tissue microarrays can then be probed with the same fluorescent antibody to monitor the expression, or lack thereof, of certain candidate markers for exposure or disease onset.

 

DATA MANAGEMENT AND STORAGE:

Making the data generated by individual gene and protein expression studies accessible and useful to the broader scientific community poses significant challenges. First, the quantity and the variety of the data are enormous, particularly compared with, for example, gene sequence data. Second, unlike sequence or other types of data, many of these data have no objective or absolute unit of measurement. Many of the gene expression data from microarrays, for example, are recorded as a ratio of signal strength between the exposed and the control experimental subjects. One means of standardizing the collection and storage of microarray data was initiated by the "grassroots" Microarray Gene Expression Database group (MGED) (www.mged.org). MGED has named the project Minimum Information About a Microarray Experiment, or MIAME, and has established both the standards for the database and pilot software to demonstrate it. The MIAME information set attempts to identify all the information necessary either to recreate or to interpret a microarray gene expression assay, including details of the laboratory techniques, species of animals used, testing conditions, and some measure of quantification and reliability of the results. Much of this information is recorded as annotations to the raw data, and the team developing MIAME has given considerable attention to the need for standardized terms and ways of describing the different parameters involved. In addition to standardizing the data and aiding its interpretation and comparison with other studies, the MIAME model database is being designed to facilitate rapid screening of the data and data "mining."

Similar efforts are under way in the area of proteomics.
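To give a feel for what MIAME-style annotation looks like in practice, here is a minimal, hypothetical record expressed as a Python dictionary. The field names below paraphrase the categories of information MIAME asks for (experiment design, samples, array design, hybridization, measurements); they are illustrative and are not the official schema.

```python
import json

# A minimal, hypothetical MIAME-style annotation record for one experiment.
experiment = {
    "experiment_design": "toxicant-treated vs. vehicle control, rat liver",
    "samples": [
        {"organism": "Rattus norvegicus", "tissue": "liver",
         "treatment": "compound X, 50 mg/kg, 24 h"},
        {"organism": "Rattus norvegicus", "tissue": "liver",
         "treatment": "vehicle, 24 h"},
    ],
    "array_design": "cDNA array, 5000 spotted clones",
    "hybridization": {"labels": ["Cy5", "Cy3"], "protocol": "two-color"},
    "measurements": {"quantification": "log2(Cy5/Cy3)",
                     "normalization": "global median"},
}
print(json.dumps(experiment, indent=2))
```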

 

Experimental Design and Data Analysis:

The greatest challenge of toxicogenomics is no longer data generation but effective collection, management, analysis, and interpretation of data. Although genome sequencing projects have managed large quantities of data, genome sequencing deals with producing a reference sequence that is relatively static, in the sense that it is largely independent of the tissue type analyzed or a particular stimulation. In contrast, transcriptomes, proteomes, and metabolomes are dynamic, and their analysis must be linked to the state of the biologic samples under analysis. Further, genetic variation influences the response of an organism to a stimulus. Although the various toxicogenomic technologies (genomics, transcriptomics, proteomics, and metabolomics) survey different aspects of cellular responses, the approaches to experimental design and high-level data analysis are universal.

 

EXPERIMENTAL DESIGN:

The types of biologic inferences that can be drawn from toxicogenomic experiments are fundamentally dependent on experimental design. The design must reflect the question that is being asked, the limitations of the experimental system, and the methods that will be used to analyze the data. Many experiments using global profiling approaches have been compromised by inadequate consideration of experimental design issues. Although experimental design for toxicogenomics remains an area of active research, a number of universal principles have emerged. First and foremost is the value of broad sampling of biologic variation.30 Many early experiments used far too few samples to draw firm conclusions, possibly because of the cost of individual microarrays. As the cost of using microarrays and other toxicogenomic technologies has declined, experiments have begun to include sampling protocols that provide better estimates of biologic and systematic variation within the data. Still, high costs remain an obstacle to large, population-based studies. It would be desirable to introduce power calculations into the design of toxicogenomic experiments.31 However, uncertainties about the variability inherent in the assays and in the study populations, as well as interdependencies among the genes and their levels of expression, limit the utility of power calculations.
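A power calculation of the kind described as desirable above might look like the sketch below, which asks how many animals per group are needed to detect a large per-gene effect at a multiple-testing-corrected significance level. The effect size, alpha, and gene count are illustrative assumptions, and, as the text notes, gene-gene dependencies limit how literally such numbers should be taken.

```python
from statsmodels.stats.power import TTestIndPower

# Solve for the per-group sample size of a two-sample t-test on one gene.
analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=2.0,        # assumed large shift in one gene's expression
    alpha=0.05 / 10_000,    # Bonferroni-corrected for ~10,000 genes tested
    power=0.8,              # 80% chance of detecting a true change
)
print(f"required samples per group: {n_per_group:.1f}")
```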

 

A second lesson that has emerged is the need for carefully matched controls and standardization in any experiment. Because microarrays and other toxicogenomic technologies are extremely sensitive, they can pick up subtle variations in gene, protein, or metabolite expression that are induced by differences in how samples are collected and handled. The use of matched controls and randomization can minimize potential sources of systematic bias and improve the quality of inferences drawn from toxicogenomic datasets.

 

Figure 4. Overview of the workflow in a toxicogenomic experiment.

 

A related question in designing toxicogenomic experiments is whether samples should be pooled to improve population sampling without increasing the number of assays.32 Pooling averages out variation but may also disguise biologically relevant outliers, for example, individuals sensitive to a particular toxicant. Although individual assays are valuable for gaining a more robust estimate of gene expression in the population under study, pooling can be helpful if experimental conditions limit the number of assays that can be performed. However, the relative costs and benefits of pooling should be analyzed carefully, particularly with respect to the goals of the experiment and plans for follow-up validation of results. Generally, the greatest power in any experiment is gained when as many biologically independent samples are analyzed as is feasible. Universal guidelines cannot be specified for all toxicogenomic experiments, but careful design focused on the goals of the experiment and adequate sampling are needed to assess both the effect and the biologic variation in a system. These lessons are not unique to toxicogenomics. Inadequate experimental designs driven by cost cutting have forced many studies to sample small populations, which ultimately compromises the quality of inferences that can be drawn.

 

TOXICOGENOMICS AND BIOMARKER DISCOVERY:

Within the drug industry, there is an acute need for effective biomarkers that predict adverse events earlier than otherwise could be done in every phase of drug development, from discovery through clinical trials, including a need for noninvasive biomarkers for clinical monitoring. There is a widespread expectation that, with toxicogenomics, biomarker discovery for assessing toxicity will advance at an accelerated rate. Each transcriptional "fingerprint" reflects a cumulative response representing complex interactions within the organism that include pharmacologic and toxicologic effects. If these interactions can be significantly correlated to an end point, and shown to be reproducible, the molecular fingerprint potentially can be qualified as a predictive biomarker. Several review articles explore issues related to biomarker assay development and provide examples of the biomarker development process.33 The utility of gene expression-based biomarkers was clearly illustrated by van Leeuwen and colleagues' 1986 identification of putative transcriptional biomarkers for early effects of smoking using peripheral blood cell profiling.34 Kim and coworkers also demonstrated a putative transcriptional biomarker that can identify genotoxic effects, but not carcinogenesis, using lymphoma cells, but noted that the single marker presented no clear advantage over existing in vitro or in vivo assays.35 Sawada et al. discovered a putative transcriptional biomarker predicting phospholipidosis in the HepG2 cell line, but they too saw no clear advantage over existing assays.36 In 2004, a consortium effort based at the International Life Sciences Institute's Health and Environmental Sciences Institute identified putative gene-based markers of renal injury and toxicity.37 As has been the case for transcriptional markers, protein-based expression assays have also shown their value as predictive biomarkers. For example, Searfoss and coworkers used a toxicogenomic approach to identify a protein biomarker for intestinal toxicity.38 Exposure biomarker examples also exist. Koskinen and coworkers developed an interesting model system in rainbow trout, using trout gene expression microarrays to develop biomarkers for assessing the presence of environmental contaminants.39 Gray and colleagues used gene expression in a mouse hepatocyte cell line to identify the presence of aromatic hydrocarbon receptor ligands in an environmental sample.40

 

Future of Predictive Toxicology:

From the rapid screening perspective, it is neither cost-effective nor practical to survey the abundance of all genes, proteins, or metabolites in a sample of interest. It would be prudent to conduct cheaper, higher-throughput measurements on the variables that are of most interest in the toxicological evaluation process. This reductionist strategy thus mandates the selection of subsets of genes, proteins, or metabolites that will yield useful information for classification purposes such as hazard identification or risk assessment. The challenge is finding out what these minimal variables are and what data we need to achieve this knowledge. Selection of these subsets by surveying the existing toxicology literature is inefficient because the role of most genes or proteins in toxicological responses is poorly defined. Moreover, there exists a multitude of undiscovered or unknown genes (ESTs) that might ultimately be key players in toxicological processes. We propose the use of the genes, proteins, or metabolites found to be most discriminative between stressor-induced profiles for efficient screening purposes. The discriminative potential of genes, proteins, or metabolites is inferred by comparing differences in the levels of these parameters across toxicant exposure scenarios. In the case of samples derived from animals treated with one of a few chemicals, the level of one gene, protein, or metabolite might be sufficient to distinguish samples based on the few classes of compounds used for the exposures. However, multiple parameters are needed to separate samples derived from exposures to a larger variety of chemical classes. Finding these discriminatory parameters requires the use of computational and mining algorithms that extract this knowledge from a database of chemical effects. Linear discriminant analysis (LDA) and single-gene ANOVA can be used to test single parameters (e.g., genes) for their ability to separate profiles corresponding to samples derived from different exposure conditions (e.g., chemical identity, biological endpoint); a minimal sketch of such a screen follows this paragraph. Higher-order analyses such as genetic algorithm/K-nearest neighbor (GA/KNN) are able to find a user-defined number of parameters that would, as a set, best discriminate between biological samples based on the levels of genes, proteins, or metabolites. Once the profile of a parameter, or a set of parameters, is found to distinguish between samples in a data set, it can be used to interrogate the identity of unknown samples for screening purposes in a high-throughput fashion. It is important to keep in mind that, since these discriminatory parameters are derived from historical data, their status might not hold once significant volumes of new data are entered into the database on which the computations are run. It is prudent to view discriminatory parameters (genes, proteins, metabolites) as dynamic entities to be updated periodically as new toxicant-related profiles become available.
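The single-gene ANOVA screen mentioned above is straightforward to sketch: test each gene for differences across chemical classes and rank by p-value. The expression matrix below is simulated, with one gene made genuinely discriminatory so the ranking should recover it; real screens would add multiple-testing correction and cross-validation.

```python
import numpy as np
from scipy.stats import f_oneway

# Simulated expression matrix: 15 samples (5 per chemical class) x 100 genes;
# gene 7 is made genuinely discriminatory between the three classes.
rng = np.random.default_rng(3)
X = rng.normal(0, 1, size=(15, 100))
classes = np.repeat([0, 1, 2], 5)
X[classes == 1, 7] += 2.0
X[classes == 2, 7] -= 2.0

# Single-gene ANOVA: test each gene for differences across chemical classes
# and rank genes by p-value to find the most discriminatory ones.
pvals = np.array([
    f_oneway(*(X[classes == c, g] for c in (0, 1, 2))).pvalue
    for g in range(X.shape[1])
])
top = np.argsort(pvals)[:5]
print("most discriminatory genes:", top, "p-values:", pvals[top].round(4))
```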

 

REFERENCES:

1.       Hisham K.H., Rupesh P.A., Richard S.P., et al. An Overview of Toxicogenomics. Curr. Issues Mol. Biol. 2002; 4: 45-56.

2.       John B., Triola D.Y. Toxicogenomics: Harnessing the Power of New Technology. 21: 87-97.

3.       Sambrook, J., Fritsch, E.F., and Maniatis, T. 1989. Molecular Cloning, A Laboratory Manual, Second Edition. Cold Spring Harbor Laboratory Press.

4.       DeRisi, J., Penland, L., Brown, P.O., et al. 1996. Use of a cDNA microarray to analyse gene expression patterns in human cancer [see comments]. Nat. Genet. 14: 457-460.

5.       Lockhart, D.J., Dong, H., Byrne, M.C., et al. 1996. Expression monitoring by hybridization to high-density oligonucleotide arrays. Nat. Biotechnol. 14: 1675-1680.

6.       Schena, M., Shalon, D., Davis, R.W., and Brown, P.O. 1995. Quantitative monitoring of gene expression patterns with a complementary DNA microarray. Science. 270: 467-470.

7.       http://www.niehs.nih.gov/multimedia/qt/ntc/ntcaltcaption.mov

8.       Kreuzer, K.A., Lass, U., Bohn, A., Landt, et al. 1999. Light Cycler technology for the quantitation of bcr/abl fusion transcripts. Cancer Res. 59: 3171-3174.

9.       Walker, N.J. 2001. Real-time and quantitative PCR: applications to mechanism-based toxicology. J. Biochem. Mol. Toxicol. 15: 121-127.

10.     Lee, K.-M., et al. 2005. Toxicology and Applied Pharmacology 207: S200-S208.

11.     Waring, J.F., Ciurlionis, R., Jolly, R.A., Heindel, M. et al. 2001. Microarray analysis of hepatotoxins in vitro reveals a correlation between gene expression profiles and mechanisms of toxicity. Toxicol Lett. 120: 359- 368.

12.     Hamadeh, H.K., Bushel, P., Jayadev, S., et al. Gene expression analysis reveals chemical-specific profiles. Toxicol. Sci.

13.     Waring, J.F., Jolly, R.A., Ciurlionis, R. et al. Clustering of hepatotoxins based on mechanism of toxicity using gene expression profiles. Toxicol. Appl. Pharmacol. 175: 28- 42.

14.     Hamadeh, H.K., Bushel, P., Jayadev, S., et al. Prediction of compound signature using high density gene expression profiling. Toxicol. Sci.

15.     Eisen, M.B., Spellman, P.T., Brown, P.O. et al. Cluster analysis and display of genome-wide expression patterns. Proc. Natl. Acad. Sci. USA. 95: 14863-14868.

16.     Johnson, R.A. and Wichern, D.W. 1998. Applied Multivariate Statistical Analysis. 4th ed. Prentice-Hall, Upper Saddle River, NJ; Jonic, S., Jankovic, T., Gajic, V., and Popovic, D. 1999. Three machine learning techniques for automatic determination of rules to control locomotion. IEEE Trans. Biomed. Eng. 46: 300-310.

17.     Solberg, H.E. 1978. Discriminant analysis. CRC Crit. Rev. Clin. Lab. Sci. 9: 209-242.

18.     Neter, J., M.H. Kutner, M.H., Nachtsheim, C.J., and Wasserman, W. 1996. Applied Linear Statistical Models. 4th ed. Irwin Press, Chicago.

19.     Bulera, S.J., Eddy, S.M., Ferguson, E., et al. RNA expression in the early characterization of hepatotoxicants in Wistar rats by high-density DNA microarrays. Hepatology. 33: 1239-1258.

20.     Hamadeh, H.K., Bushel, P., Jayadev, S., et al. Prediction of compound signature using high density gene expression profiling. Toxicol. Sci.

21.     Tennant, R.W. The National Center for Toxicogenomics: Using new technologies to inform mechanistic toxicology. Environ. Health Perspect.

22.     Huang, Q., Dunn, R.T., Jayadev, S. et al. Assessment of cisplatin-induced nephrotoxicity by microarray technology. Toxicol. Sci. 63: 196-207.

23.     John B., Triola D.Y. Toxicogenomics: Harnessing the Power of New Technology. 21: 87-97.

24.     Ulrich, R. and Friend, S.H. 2002. Toxicogenomics and drug discovery: will new technologies help us produce better drugs? Nature Reviews Drug Discovery 1.

25.     Nicholson, J.K., et al. 2002. Nature Reviews Drug Discovery 1(2): 153-161.

26.     Robertson, D.G., Reily, M.D., Sigler, R.E. et al. Metabonomics: evaluation of nuclear magnetic resonance (NMR) and pattern recognition technology for rapid in vivo screening of liver and kidney toxicants. Toxicol. Sci. 57: 326-337.

27.     Holmes, E., Caddick, S., Lindon, J.C. et al. 1995. 1H and 2H NMR spectroscopic studies on the metabolism and biochemical effects of 2-bromoethanamine in the rat. Biochem. Pharmacol. 49: 1349-1359.

28.     Emmert-Buck, M.R., Bonner, R.F., Smith, P.D. et al 1996. Laser capture microdissection. Science. 274: 998-1001.

29.     Hamadeh, H.K., Bushel, P., Tucker, C.J., et al. 2002a. Detection of diluted gene expression alterations using cDNA microarrays. Biotechniques.

30.     Churchill, G.A. 2002. Fundamentals of experimental design for cDNA microarrays. Nat. Genet. 32(Suppl.):490-495.

31.     Simon, R., M.D. Radmacher, and K. Dobbin. 2002. Design of studies using DNA microarrays. Genet. Epidemiol. 23(1):21-36.

32.     Simon, R., M.D. Radmacher, K. Dobbin, and L.M. McShane. 2003. Pitfalls in the use of DNA microarray data for diagnostic and prognostic classification. J. Natl. Cancer Inst. 95(1): 14-18.

33.     Wagner, J.A. 2002. Overview of biomarkers and surrogate endpoints in drug development. Dis. Markers 18(2):41-46.

34.     van Leeuwen, A., P.I. Schrier, M.J. Giphart, et al. 1986. TCA: A polymorphic genetic marker in leukemias and melanoma cell lines. Blood 67(4): 1139-1142.

35.     Kim, J.Y., J. Kwon, J.E. Kim, et al. 2005. Identification of potential biomarkers of genotoxicity and carcinogenicity in L5178Y mouse lymphoma cells by cDNA microarray analysis. Environ. Mol. Mutagen. 45(1): 80-89.

36.     Sawada, H., K. Takami, and S. Asahi. 2005. A toxicogenomic approach to drug-induced phospholipidosis: Analysis of its induction mechanism and establishment of a novel in vitro screening system. Toxicol. Sci. 83(2): 282-292.

37.     Amin, R.P., A.E. Vickers, F. Sistare, et al. 2004. Identification of putative gene based markers of renal toxicity. Environ. Health Perspect. 112(4): 465-479.

38.     Searfoss, G.H., Jordan, W.H., Calligaro, D.O., et al. 2003. Adipsin, a biomarker of gastrointestinal toxicity mediated by a functional gamma-secretase inhibitor. J. Biol. Chem. 278: 46107-46116.

39.     Koskinen, H., P. Pehkonen, E. Vehniainen, et al. 2004. Response of rainbow trout transcriptome to model chemical contaminants. Biochem. Biophys. Res. Commun. 320(3): 745-753.

40.     Gray, J.P., T.L. Leas, et al. 2003. Evidence of aryl hydrocarbon receptor ligands in Presque Isle Bay of Lake Erie. Aquat. Toxicol. 64(3): 343-358.

 

 

Received on 13.05.2009

Accepted on 10.06.2009     

© A&V Publication, all rights reserved

Research J. Pharmacology and Pharmacodynamics 2(2): March –April 2010: 131-140